
    A refined version of general E-unification

    Transformation-based systems for general E-unification were first investigated by Gallier and Snyder. Their system extends the well-known rules for syntactic unification by Lazy Paramodulation, thereby accounting for the equational theory. More recently, Dougherty and Johann improved on this method by restricting the Lazy Paramodulation inferences. In this paper, we show that their system can be further improved by a stronger restriction on the applicability of Lazy Paramodulation. It turns out that the framework of proof transformations provides an elegant and natural means of proving the completeness of the inference system.
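
    For orientation, the following is a schematic rendering of the rule format such systems build on (standard in the literature; the ordering and applicability restrictions that this paper is actually about are omitted): the Decompose rule of syntactic unification next to a generic Lazy Paramodulation step, which rewrites a goal with an equation l ≈ r from E while postponing the unification of the affected subterm.

        \[
        \text{(Decompose)} \quad
        \frac{\{f(s_1,\dots,s_n) \doteq f(t_1,\dots,t_n)\} \cup G}
             {\{s_1 \doteq t_1,\ \dots,\ s_n \doteq t_n\} \cup G}
        \]
        \[
        \text{(Lazy Paramodulation, } l \approx r \in E,\ p \text{ a position in } s\text{)} \quad
        \frac{\{s \doteq t\} \cup G}
             {\{s|_p \doteq l,\ s[r]_p \doteq t\} \cup G}
        \]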

    Semi-unification

    Semi-unifiability is a generalization of both unification and matching. It is used to check rewrite rules for nontermination. In this paper, an inference system is presented that decides semi-unifiability of two terms s and t and computes a semi-unifier. In contrast to an algorithm by Kapur, Musser, et al., this inference system comes very close to the one for ordinary unification.
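
    For reference, the standard notion underlying the abstract: s semi-unifies with t iff there exist substitutions \sigma (the semi-unifier) and \rho such that

        \[
        \rho(\sigma(s)) = \sigma(t),
        \]

    so unification is the special case \rho = \mathrm{id}, and matching the case \sigma = \mathrm{id}. The link to nontermination, roughly: if the left-hand side of a rewrite rule semi-unifies with a subterm of its right-hand side, the rule loops. For example, for the rule f(x) -> f(g(x)), take \sigma = \mathrm{id} and \rho = \{x \mapsto g(x)\}; indeed f(a) -> f(g(a)) -> f(g(g(a))) -> ...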

    Unification of terms with exponents

    In an ICALP (1991) paper, H. Chen and J. Hsiang introduced a notion that allows for a finite representation of certain infinite sets of terms. These so-called ω-terms find an application in logic programming, where they can serve to finitely represent an infinite number of answers or to avoid nontermination in certain cases. Another application is in the field of equational logic. Using ω-terms, it is possible to avoid a certain type of divergence of ordered completion. In all cases, unification is the basic computational aspect of this notation. Chen and Hsiang give a complete and terminating unification algorithm for ω-terms. More recently, H. Comon introduced terms with exponents, significantly extending Chen and Hsiang's notion of ω-terms, and provided a fairly complicated unification algorithm. This paper introduces a further syntactic generalization of Comon's notion, together with a comparatively simple inference system for unification.
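
    As a purely illustrative sketch of the representation idea (hypothetical Python, far simpler than Comon's actual notion, and omitting unification itself): a term with an exponent unfolds a context n times around a base term, so that a single finite object stands for an infinite family of ordinary terms.

        from dataclasses import dataclass
        from typing import Tuple

        @dataclass(frozen=True)
        class Term:
            """An ordinary first-order term: a function symbol applied to arguments."""
            fun: str
            args: Tuple["Term", ...] = ()

        @dataclass(frozen=True)
        class ExpTerm:
            """A term with an exponent: `context` is unfolded n times around `base`.

            The hole of the context is marked by the special symbol "*", so
            ExpTerm(Term("f", (Term("*"),)), Term("a")) denotes f^n(a).
            """
            context: Term
            base: Term

        def plug(context: Term, filler: Term) -> Term:
            """Replace the hole "*" in `context` by `filler`."""
            if context.fun == "*":
                return filler
            return Term(context.fun, tuple(plug(a, filler) for a in context.args))

        def unfold(t: ExpTerm, n: int) -> Term:
            """The n-th member of the schematized family: context^n applied to base."""
            result = t.base
            for _ in range(n):
                result = plug(t.context, result)
            return result

        # f^n(a) finitely represents the infinite set {a, f(a), f(f(a)), ...}.
        f_n_a = ExpTerm(Term("f", (Term("*"),)), Term("a"))
        print([unfold(f_n_a, n) for n in range(3)])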

    Automatic synthesis of decision procedures


    Completeness of resolution and superposition calculi

    We modify Bezem's completeness proof for ground resolution (M. Bezem, Completeness of Resolution Revisited, Theoretical Computer Science 74 (1990), 227-237) in order to deal with ordered resolution, redundancy, and equational reasoning in the form of superposition. The resulting proof is completely independent of the cardinality of the set of clauses.
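
    For reference, the ground inference rules at issue have, up to notational variants, the following shape, where the side conditions are the usual maximality constraints with respect to an ordering \succ: A (strictly) maximal in C \lor A, \lnot A maximal in D \lor \lnot A, and, for superposition, l \succ r and s[l] \succ t.

        \[
        \text{(Ordered Resolution)} \quad
        \frac{C \lor A \qquad D \lor \lnot A}{C \lor D}
        \qquad
        \text{(Superposition)} \quad
        \frac{C \lor l \approx r \qquad D \lor s[l] \approx t}{C \lor D \lor s[r] \approx t}
        \]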

    A Deep Relevance Matching Model for Ad-hoc Retrieval

    In recent years, deep neural networks have led to exciting breakthroughs in speech recognition, computer vision, and natural language processing (NLP) tasks. However, there have been few positive results from deep models on ad-hoc retrieval tasks. This is partially due to the fact that many important characteristics of the ad-hoc retrieval task have not yet been well addressed in deep models. Typically, existing work using deep models formalizes the ad-hoc retrieval task as a matching problem between two pieces of text and treats it as equivalent to many NLP tasks such as paraphrase identification, question answering, and automatic conversation. However, we argue that ad-hoc retrieval is mainly about relevance matching, while most NLP matching tasks concern semantic matching, and there are some fundamental differences between these two matching tasks. Successful relevance matching requires proper handling of exact matching signals, query term importance, and diverse matching requirements. In this paper, we propose a novel deep relevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model employs a joint deep architecture at the query term level for relevance matching. By using matching histogram mapping, a feed-forward matching network, and a term gating network, we can effectively deal with the three relevance matching factors mentioned above. Experimental results on two representative benchmark collections show that our model can significantly outperform well-known retrieval models as well as state-of-the-art deep matching models.
    Comment: CIKM 2016, long paper
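
    As a concrete reading of the matching histogram mapping step (a sketch of the general idea, not the authors' exact implementation): for each query term, the cosine similarities between its embedding and every document term embedding are bucketed into fixed bins over [-1, 1]; the paper additionally reserves a separate bin for exact matches (similarity 1), which this sketch simply folds into the last bucket.

        import numpy as np

        def matching_histogram(q_vec: np.ndarray, doc_vecs: np.ndarray,
                               bins: int = 30) -> np.ndarray:
            """Histogram of cosine similarities between one query term and all
            document terms.

            q_vec: (d,) embedding of a single query term.
            doc_vecs: (n, d) embeddings of the document's terms.
            Returns a log-count histogram over `bins` equal-width buckets on [-1, 1].
            """
            q = q_vec / np.linalg.norm(q_vec)
            d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
            sims = d @ q                             # (n,) cosines in [-1, 1]
            counts, _ = np.histogram(sims, bins=bins, range=(-1.0, 1.0))
            return np.log1p(counts)                  # log-count variant dampens raw counts

        rng = np.random.default_rng(0)
        hist = matching_histogram(rng.normal(size=50), rng.normal(size=(200, 50)))
        print(hist.shape)  # (30,) -- fed into the feed-forward matching network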

    Asynchronous Training of Word Embeddings for Large Text Corpora

    Word embeddings are a powerful approach for analyzing language and have become widely popular for numerous tasks in information retrieval and text mining. Training embeddings over huge corpora is computationally expensive because the input is typically processed sequentially and parameters are updated synchronously. Distributed architectures for asynchronous training that have been proposed either focus on scaling vocabulary sizes and dimensionality or suffer from expensive synchronization latencies. In this paper, we propose a scalable approach that instead partitions the input space, in order to scale to massive text corpora without sacrificing the performance of the embeddings. Our training procedure does not involve any parameter synchronization except a final sub-model merge phase that typically executes in a few minutes. Our distributed training scales seamlessly to large corpus sizes, and on a variety of NLP benchmarks, models trained by our distributed procedure, which requires 1/10 of the time taken by the baseline approach, achieve comparable performance and sometimes even up to a 45% improvement. Finally, we show that our approach is robust to missing words in sub-models and can effectively reconstruct word representations.
    Comment: This paper contains 9 pages and has been accepted at WSDM 2019
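
    A minimal sketch of the partition-then-merge scheme the abstract describes (train_word2vec is a placeholder for any standard embedding trainer, and averaging shared words in the merge phase is one plausible strategy, not necessarily the paper's):

        from concurrent.futures import ProcessPoolExecutor
        from collections import defaultdict
        import numpy as np

        def train_word2vec(corpus_part: list) -> dict:
            """Placeholder: train embeddings on one partition (e.g. with gensim)."""
            rng = np.random.default_rng(0)
            vocab = {w for line in corpus_part for w in line.split()}
            return {w: rng.normal(size=100) for w in vocab}

        def train_distributed(corpus: list, workers: int = 4) -> dict:
            # 1. Partition the input space: disjoint corpus slices per worker.
            parts = [corpus[i::workers] for i in range(workers)]
            # 2. Train sub-models fully asynchronously: no parameter synchronization.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                sub_models = list(pool.map(train_word2vec, parts))
            # 3. Final merge phase: combine sub-model vectors per word (here: mean).
            merged = defaultdict(list)
            for model in sub_models:
                for word, vec in model.items():
                    merged[word].append(vec)
            return {w: np.mean(vs, axis=0) for w, vs in merged.items()}

        if __name__ == "__main__":
            model = train_distributed(["the quick brown fox jumps"] * 100)
            print(len(model), "words merged")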

    Neural Collaborative Filtering

    In recent years, deep neural networks have yielded immense success in speech recognition, computer vision, and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, it has primarily been used to model auxiliary information, such as textual descriptions of items and acoustic features of music. When it comes to modeling the key factor in collaborative filtering -- the interaction between user and item features -- existing work still resorts to matrix factorization, applying an inner product to the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.
    Comment: 10 pages, 7 figures
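
    To make the core architectural move concrete (a sketch assuming a plain MLP interaction function; layer sizes and initialization are arbitrary, and the real model is of course trained end-to-end):

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_items, dim = 1000, 500, 32

        # Learned embedding tables (randomly initialized here, trained in practice).
        user_emb = rng.normal(scale=0.1, size=(n_users, dim))
        item_emb = rng.normal(scale=0.1, size=(n_items, dim))

        def mf_score(u: int, i: int) -> float:
            """Matrix factorization: a fixed inner product of latent features."""
            return float(user_emb[u] @ item_emb[i])

        # MLP weights for the learned interaction function (two hidden layers).
        W1 = rng.normal(scale=0.1, size=(2 * dim, 64)); b1 = np.zeros(64)
        W2 = rng.normal(scale=0.1, size=(64, 32));      b2 = np.zeros(32)
        w_out = rng.normal(scale=0.1, size=32)

        def ncf_score(u: int, i: int) -> float:
            """NCF idea: replace the inner product by a learned function."""
            x = np.concatenate([user_emb[u], item_emb[i]])  # joint representation
            h = np.maximum(x @ W1 + b1, 0.0)                # ReLU hidden layer 1
            h = np.maximum(h @ W2 + b2, 0.0)                # ReLU hidden layer 2
            logit = h @ w_out
            return float(1.0 / (1.0 + np.exp(-logit)))      # interaction probability

        print(mf_score(3, 7), ncf_score(3, 7))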